
    Efficient polarization entanglement purification based on parametric down-conversion sources with cross-Kerr nonlinearity

    We present a scheme for entanglement purification based on two parametric down-conversion (PDC) sources with cross-Kerr nonlinearities. The scheme comprises two steps. The first is a primary entanglement purification protocol for PDC sources that uses quantum nondemolition (QND) detectors to transfer the spatial entanglement of photon pairs to their polarization. Here the QND detectors act as controlled-NOT (CNOT) gates; they also resolve the photon number in the spatial modes, which prepares the retained photon pairs for further purification in the second step. In that second purification step, new QND detectors are designed to act as CNOT gates. The protocol has the advantage of high yield, and it requires neither CNOT gates built from linear optical elements nor sophisticated single-photon detectors, which makes it more convenient for practical applications.
    Comment: 8 pages, 7 figures
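
    As a rough illustration of the CNOT operation that the QND detectors are said to emulate (the sketch below and its basis conventions are our own, not taken from the paper), the gate flips the target photon's polarization conditioned on the control photon being vertically polarized:

```python
# Illustrative sketch, not from the paper: action of a polarization CNOT,
# the operation the QND detectors are described as emulating.
# Basis conventions |H> -> [1, 0], |V> -> [0, 1] and two-photon ordering
# |HH>, |HV>, |VH>, |VV> are assumptions made for this example.
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# CNOT: flips the target polarization when the control photon is |V>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Control photon in an equal superposition, target photon in |H>.
alpha = beta = 1 / np.sqrt(2)
state_in = np.kron(alpha * H + beta * V, H)

state_out = CNOT @ state_in
# state_out = alpha|HH> + beta|VV>: the target now carries the control's
# polarization information, which is the correlation a purification round
# can compare across two noisy pairs.
print(state_out)
```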

    A splitting semi-implicit method for stochastic incompressible Euler equations on $\mathbb{T}^2$

    The main difficulty in studying numerical methods for stochastic evolution equations (SEEs) lies in the treatment of the time discretization (J. Printems, ESAIM Math. Model. Numer. Anal. (2001)). Although many results on numerical approximations of SEEs have been developed, as far as we know, none of them cover the stochastic incompressible Euler equations. To bridge this gap, this paper proposes and analyses a splitting semi-implicit method in the temporal direction for the stochastic incompressible Euler equations on the torus $\mathbb{T}^2$ driven by additive noise. By a Galerkin approximation and a fixed-point technique, we establish the unique solvability of the proposed method. Based on regularity estimates for both the exact and the numerical solutions, we measure the error in $L^2(\mathbb{T}^2)$ and show that the pathwise convergence order is nearly $\frac{1}{2}$ and the convergence order in probability is almost $1$.
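
    For orientation, a standard way to write the additive-noise stochastic incompressible Euler equations referred to above is the following (the precise form of the noise term and the initial datum are our assumptions, not quoted from the paper):

```latex
% Stochastic incompressible Euler equations on the torus with additive noise.
% Standard formulation; the assumptions on the Wiener process W and the
% initial datum u_0 are ours, not taken from the abstract.
\[
\begin{aligned}
  \mathrm{d}u + \bigl[(u\cdot\nabla)u + \nabla p\bigr]\,\mathrm{d}t
    &= \mathrm{d}W(t) && \text{on } \mathbb{T}^2 \times (0,T],\\
  \nabla\cdot u &= 0, \qquad u(0,\cdot) = u_0 .
\end{aligned}
\]
```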

    NLO QCD corrections to Single Top and W associated production at the LHC with forward detector acceptances

    In this paper we study the associated photoproduction of a single top quark and a W boson via the main reaction $\rm pp\rightarrow p\gamma p\rightarrow pW^{\pm}t+Y$ at the 14 TeV Large Hadron Collider (LHC) up to next-to-leading-order (NLO) QCD, assuming a typical LHC multipurpose forward detector. We use the five-flavor-number scheme (5FNS) with a massless bottom quark throughout the calculation. Our results show that the NLO QCD corrections reduce the scale uncertainty. The typical K-factors are in the range 1.15 to 1.2, corresponding to NLO QCD corrections of 15% to 20% relative to the leading-order (LO) predictions with our chosen parameters.
    Comment: 41 pages, 12 figures. arXiv admin note: text overlap with arXiv:1106.2890 by other authors
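
    For concreteness, the quoted K-factors translate into relative corrections via the usual definition (assumed here, since the abstract does not spell it out):

```latex
% K-factor convention assumed here (standard usage): the ratio of the NLO
% to the LO cross section.
\[
K = \frac{\sigma_{\mathrm{NLO}}}{\sigma_{\mathrm{LO}}},
\qquad
K \in [1.15,\,1.2]
\;\Longleftrightarrow\;
\frac{\sigma_{\mathrm{NLO}} - \sigma_{\mathrm{LO}}}{\sigma_{\mathrm{LO}}}
\in [0.15,\,0.2].
\]
```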

    Sound and Fine-grain Specification of Ideal Functionalities

    Nowadays it is widely accepted to formulate the security of a protocol carrying out a given task via the "trusted-party paradigm," where the protocol execution is compared with an ideal process in which the outputs are computed by a trusted party that sees all the inputs. A protocol is said to securely carry out a given task if running the protocol with a realistic adversary amounts to "emulating" the ideal process with the appropriate trusted party. In the Universal Composability (UC) framework the program run by the trusted party is called an ideal functionality. While this simulation-based security formulation provides strong security guarantees, its usefulness is contingent on the properties and correct specification of the ideal functionality, which, as demonstrated in recent years by the coexistence of multiple, complex functionalities for the same task as well as by their "unstable" nature, does not seem to be an easy task. In this paper we address this problem by introducing a general methodology for the sound specification of ideal functionalities. First, we introduce the class of canonical ideal functionalities for a cryptographic task, which unifies the syntactic specification of a large class of cryptographic tasks under the same basic template functionality. Furthermore, this representation enables the isolation of the individual properties of a cryptographic task as separate members of the corresponding class. By endowing the class of canonical functionalities with an algebraic structure we are able to combine basic functionalities into a single final canonical functionality for a given task. Effectively, this puts forth a bottom-up approach for the specification of ideal functionalities: first one defines a set of basic constituent functionalities for the task at hand, and then combines them into a single ideal functionality taking advantage of the algebraic structure. In our framework, the constituent functionalities of a task can be derived either directly or, following a translation strategy we introduce, from existing game-based definitions; such definitions have in many cases captured desired individual properties of cryptographic tasks, albeit in less adversarial settings than universal composition. Our translation methodology entails a sequence of steps that derive a corresponding canonical functionality given a game-based definition. In this way, we obtain a well-defined mapping of game-based security properties to their corresponding UC counterparts. Finally, we demonstrate the power of our approach by applying our methodology to a variety of basic cryptographic tasks, including commitments, digital signatures, zero-knowledge proofs, and oblivious transfer. While in some cases our derived canonical functionalities are equivalent to existing formulations, thus attesting to the validity of our approach, in others they differ, enabling us to "debug" previous definitions and pinpoint their shortcomings.
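
    A minimal toy sketch of the bottom-up composition described above, under heavy simplification: constituent functionalities are modeled as predicates on transcripts, and the "algebraic" combination simply intersects the behaviours they allow. The class names and the commitment example are illustrative assumptions, not the paper's formalism.

```python
# Toy sketch only: not the paper's definitions, just an illustration of
# combining constituent "property" functionalities into one functionality.
from dataclasses import dataclass
from typing import Callable, Dict

Transcript = Dict[str, object]
Property = Callable[[Transcript], bool]


@dataclass
class CanonicalFunctionality:
    """A template functionality parameterized by the properties it enforces."""
    name: str
    properties: tuple

    def allows(self, transcript: Transcript) -> bool:
        # The combined functionality permits a behaviour only if every
        # constituent property permits it.
        return all(p(transcript) for p in self.properties)

    def __and__(self, other: "CanonicalFunctionality") -> "CanonicalFunctionality":
        # "Algebraic" combination: merge the constituent properties.
        return CanonicalFunctionality(
            name=f"{self.name}&{other.name}",
            properties=self.properties + other.properties,
        )


# Example (hypothetical): a commitment functionality built from a hiding
# piece and a binding piece.
hiding = CanonicalFunctionality(
    "hiding", (lambda t: not t.get("value_leaked", False),))
binding = CanonicalFunctionality(
    "binding", (lambda t: not t.get("opened_to_two_values", False),))
commitment = hiding & binding

print(commitment.allows({"value_leaked": False, "opened_to_two_values": False}))  # True
```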

    Charged lepton flavor violating Higgs decays at future $e^+e^-$ colliders

    After the discovery of the Higgs boson, several future experiments have been proposed to study its properties, including two circular lepton colliders, the CEPC and the FCC-ee, and one linear lepton collider, the ILC. We evaluate the precision reach of these colliders in measuring the branching ratios of the charged lepton flavor violating Higgs decays $H\to e^\pm\mu^\mp$, $e^\pm\tau^\mp$ and $\mu^\pm\tau^\mp$. The expected upper bounds on the branching ratios given by the circular (linear) colliders are found to be $\mathcal{B}(H\to e^\pm\mu^\mp) < 1.2\ (2.1) \times 10^{-5}$, $\mathcal{B}(H\to e^\pm\tau^\mp) < 1.6\ (2.4) \times 10^{-4}$ and $\mathcal{B}(H\to \mu^\pm\tau^\mp) < 1.4\ (2.3) \times 10^{-4}$ at 95% CL, an improvement of one to two orders of magnitude over the current experimental bounds. We also discuss the constraints that these upper bounds set on certain theory parameters, including the charged lepton flavor violating Higgs couplings, the corresponding parameters in the type-III 2HDM, and the new-physics cut-off scales in the SMEFT, in RS models and in models with heavy neutrinos.
    Comment: 20 pages, 2 figures (extends the CEPC study to the FCC-ee and the ILC; matches the published version)
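
    As a back-of-the-envelope illustration of how such a branching-ratio bound constrains a flavor-violating Yukawa coupling (the width formula and all numerical inputs below are standard assumptions on our part, not numbers from the paper):

```python
# Rough, hedged estimate (not from the paper): translate the projected bound
# B(H -> e mu) < 1.2e-5 into a bound on an effective flavor-violating Yukawa
# coupling, using the standard tree-level width
#   Gamma(H -> e mu) = m_H / (8*pi) * (|Y_emu|**2 + |Y_mue|**2)
# and approximating the total Higgs width by its SM value.
import math

m_H = 125.25        # Higgs mass in GeV (assumed input)
Gamma_H = 4.1e-3    # approximate SM total Higgs width in GeV (assumed input)
B_max = 1.2e-5      # projected 95% CL bound quoted in the abstract

# Effective coupling: |Y_eff|^2 = |Y_emu|^2 + |Y_mue|^2
Y_eff_max = math.sqrt(8 * math.pi * Gamma_H * B_max / m_H)
print(f"|Y_eff| < {Y_eff_max:.1e}")   # roughly 1e-4
```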